
Collaborating Authors

Erik Brynjolfsson


The State of AI: Welcome to the economic singularity

MIT Technology Review

Bonus: If you're a subscriber, you can join David and Richard, alongside MIT Technology Review's editor in chief, Mat Honan, for an exclusive live conversation about this topic on Tuesday, December 9 at 1pm ET. Sign up here. Any far-reaching new technology is always uneven in its adoption, but few have been more uneven than generative AI. That makes it hard to assess its likely impact on individual businesses, let alone on productivity across the economy as a whole. At one extreme, AI coding assistants have revolutionized the work of software developers. Mark Zuckerberg recently predicted that half of Meta's code would be written by AI within a year.


Brief for the Canada House of Commons Study on the Implications of Artificial Intelligence Technologies for the Canadian Labor Force: Generative Artificial Intelligence Shatters Models of AI and Labor

Frank, Morgan R.

arXiv.org Artificial Intelligence

Exciting advances in generative artificial intelligence (AI) have sparked concern for jobs, education, productivity [1], and the future of work. As with past technologies, generative AI may not lead to mass unemployment. But, unlike past technologies, generative AI is creative, cognitive, and potentially ubiquitous, which makes the usual assumptions of automation predictions ill-suited for today. Existing projections suggest that generative AI will impact workers in occupations that were previously considered immune to automation. As AI's full set of capabilities and applications emerges, policy makers should promote workers' career adaptability. This goal requires improved data on job separations and unemployment by locality and job title in order to identify early indicators for the workers facing labor disruption. Further, prudent policy should incentivize education programs to accommodate learning with AI as a tool while preparing students for the demands of the future of work.


AI experts on whether you should be "terrified" of ChatGPT - CBS News

#artificialintelligence

ChatGPT is artificial intelligence that writes for you, any kind of writing you like – letters, song lyrics, research papers, recipes, therapy sessions, poems, essays, outlines, even software code. And despite its clunky name (GPT stands for Generative Pre-trained Transformer), within five days of its launch, more than a million people were using it. How easy is it to use? Try typing in, "Write a limerick about the effect of AI on humanity." Or how about, "Tell the Goldilocks story in the style of the King James Bible." Microsoft has announced it will build the program into Microsoft Word. The first books written by ChatGPT have already been published.


Reflecting On "Artificial General Intelligence" And AI Sentience

#artificialintelligence

Intelligence comes in many forms. Octopuses are highly intelligent--and completely unlike humans. In case you haven't noticed, artificial intelligence systems have been behaving in increasingly astonishing ways lately. OpenAI's new model DALL-E 2, for instance, can produce captivating original images based on simple text prompts. Models like DALL-E are making it harder to dismiss the notion that AI is capable of creativity. Consider, for instance, DALL-E's imaginative rendition of "a hip-hop cow in a denim jacket recording a hit single in the studio."


On Artificial General Intelligence, AI Sentience, And Large Language Models

#artificialintelligence

Many forms of intelligence exist. Octopuses are highly intelligent--and completely unlike humans. In case you haven't noticed, artificial intelligence systems have been behaving in increasingly astonishing ways lately. OpenAI's new model DALL-E 2, for instance, can produce captivating original images based on simple text prompts. Models like DALL-E are making it harder to dismiss the notion that AI is capable of creativity. Consider, for instance, DALL-E's imaginative rendition of "a hip-hop cow in a denim jacket recording a hit single in the studio."


The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence

Brynjolfsson, Erik

arXiv.org Artificial Intelligence

In 1950, Alan Turing proposed an imitation game as the ultimate test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions were indistinguishable from those of a human? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds. But not all types of AI are human-like. In fact, many of the most powerful systems are very different from humans. So an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, humans retain the power to insist on a share of the value created. Furthermore, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers.


Top 10 Must-Read IoT Books

#artificialintelligence

Must-read IoT books: The Internet of Things (IoT), which refers to the networking of physical objects through embedded sensors, actuators, and other devices that can collect or transmit information about those objects, is growing at a very fast pace. By 2019, the global IoT market is projected to be valued at more than 1.7 trillion US dollars, with the number of connected devices worldwide forecast to reach 20.35 billion in the same year. To better understand these emerging technologies and their implications, here are ten must-read books focusing on the IoT. The Amazon Way on IoT explains how the combination of sensors, cloud computing, and machine learning can be used to improve customer experiences, drive operational improvements, and build new business models. The book offers guidance through the maze of emerging technologies, customer experiences, and business models; key methods for success from Amazon's master playbook, such as creating seamless customer experiences, improving processes, and building new business models with tools such as sensors, machine learning, and cloud computing; and approaches to help organizations tackle the technology, business, and internal challenges of innovating with the IoT.


Stanford Digital Economy Lab : AI & The Future of Work Conference

#artificialintelligence

The AI & The Future of Work Conference assembled a roster of visionary researchers, executives, and policy experts to examine and discuss the profound impact of AI and digital technologies on productivity, business, and policy. The Stanford Digital Economy Lab is a new venture within the Stanford Institute for Human-Centered Artificial Intelligence (HAI), co-sponsored by the Stanford Institute for Economic Policy Research. Our purpose is to research, measure, and understand the burgeoning digital economy and its far-reaching effects on how we live and work. We're led by Professor Erik Brynjolfsson, whose pioneering research examines the impact of information technologies on business strategy, productivity and performance, digital commerce, and intangible assets.


Digital Transformation and the 4IR - AI, Blockchain, IoT, Fintech

#artificialintelligence

Digital Transformation and the 4IR – AI, Blockchain, IoT, Fintech is not a steady, fixed concept. It has evolved swiftly over the years through gradual improvements in technology, particularly the Internet, which fostered increasing digitization. Jeremy Rifkin, a US economist, futurist, and sociologist who has analysed the shifts in society driven by technology over several decades, has written widely about the topic. He was the first to tackle the impact of digital technologies and how they were triggering what he saw as a profound industrial shift. He described that shift, which he called the "third industrial revolution", in his book "The Third Industrial Revolution: How Lateral Power Is Transforming Energy, the Economy, and the World" (2011).


Whoever leads in artificial intelligence in 2030 will rule the world until 2100

#artificialintelligence

To kick off the Future Development blog in 2020, we present the fourth in a four-part series on the future of development. A couple of years ago, Vladimir Putin warned Russians that the country that led in technologies using artificial intelligence would dominate the globe. He was right to be worried. Russia is now a minor player, and the race now seems to be mainly between the United States and China. But don't count out the European Union just yet; the EU is still a fifth of the world economy, and it has underappreciated strengths.